US has 'moral imperative' to develop AI weapons, says panel

The Guardian

The US should not agree to ban the use or development of autonomous weapons powered by artificial intelligence (AI) software, a government-appointed panel has said in a draft report for Congress. The panel, led by former Google chief executive Eric Schmidt, on Tuesday concluded two days of public discussion about how the world's biggest military power should consider AI for national security and technological advancement. Its vice-chairman, Robert Work, a former deputy secretary of defense, said autonomous weapons are expected to make fewer mistakes than humans do in battle, leading to reduced casualties or skirmishes caused by target misidentification. "It is a moral imperative to at least pursue this hypothesis," he said. For about eight years, a coalition of non-governmental organisations has pushed for a treaty banning "killer robots", saying human control is necessary to judge attacks' proportionality and assign blame for war crimes.


Google won't develop AI weapons, announces new ethical strategy

Internet of Business

Google has unveiled a set of principles for ethical AI development and deployment, and announced that it will not allow its AI software to be used in weapons or for "unreasonable surveillance". In a detailed blog post, CEO Sundar Pichai said that Google would not develop technologies that cause, or are likely to cause, harm. "Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," he explained. Google will not allow its technologies to be used in weapons or in "other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people", he said. Also on the no-go list are "technologies that gather or use information for surveillance, violating internationally accepted norms", and those "whose purpose contravenes widely accepted principles of international law and human rights". The move follows widespread internal and external criticism of Google's involvement in Project Maven, the Pentagon's aerial battlefield intelligence programme, which some saw as a step towards the weaponisation of AI.


Google pledges not to develop AI weapons, but says it will still work with the military

Google has released a set of principles to guide its work in artificial intelligence, making good on a promise it made last month following controversy over its involvement in a Department of Defense drone project. The document, titled "Artificial Intelligence at Google: our principles," does not directly reference that work, but makes clear that the company will not develop AI for use in weaponry. It also outlines a number of broad guidelines for AI, touching on issues like bias, privacy, and human oversight. While the new principles forbid the development of AI weaponry, they state that Google will continue to work with the military "in many other areas." Speaking to The Verge, a Google representative said that had these principles been published earlier, the company would likely not have become involved in the Pentagon's drone project, which used AI to analyze surveillance footage.